Applied Bayesian Analyses in R

Part 3: time to mix

Sven De Maeyer

Model comparison

Leave-one-out cross-validation

Key idea:

  • leave one data point out of the data

  • re-fit the model

  • predict the value for that one data point and compare with observed value

  • re-do this n times
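The steps above can be sketched as a brute-force loop. A minimal illustration in base R, using a simple lm() fit on the built-in mtcars data rather than a Bayesian model (for a Bayesian model, every iteration would mean re-running the full MCMC sampling, which is exactly why this is impractical):

```r
# Brute-force leave-one-out cross-validation (illustrative sketch)
fit_data <- mtcars                 # built-in example data
n <- nrow(fit_data)
errors <- numeric(n)

for (i in seq_len(n)) {
  # leave one data point out and re-fit the model
  fit <- lm(mpg ~ wt, data = fit_data[-i, ])
  # predict the value for that one data point
  pred <- predict(fit, newdata = fit_data[i, ])
  # compare with the observed value
  errors[i] <- fit_data$mpg[i] - pred
}

mean(errors^2)  # leave-one-out mean squared prediction error
```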

loo package

Leave-one-out as described above is computationally almost infeasible: it would require re-fitting the model n times!

loo instead uses a “shortcut making use of the mathematics of Bayesian inference” (Pareto-smoothed importance sampling), so the model only has to be fitted once

Result: the elpd (“expected log predictive density”): a higher elpd implies a better model fit, without being sensitive to over-fitting!

loo code

# leave-one-out cross-validation for each model
loo_Mod1 <- loo(MarathonTimes_Mod1)
loo_Mod2 <- loo(MarathonTimes_Mod2)

# compare the models on their elpd
Comparison <- 
  loo_compare(
    loo_Mod1, 
    loo_Mod2
    )

print(Comparison)

The best-fitting model is listed first, with elpd_diff = 0; for the other model(s), elpd_diff shows the difference in elpd relative to the best model, together with its standard error (se_diff).


Bayesian Mixed Effects Model

New example data WritingData.RData

  • Experimental study on writing instruction

  • 2 conditions:

    • Control condition (Business as usual)
    • Experimental condition (Observational learning)

Your Turn

  • Open WritingData.RData

  • Estimate 3 models with SecondVersion as dependent variable

    • M1: fixed effect of FirstVersion_GM + random effect of Class ((1|Class))
    • M2: M1 + random effect of FirstVersion_GM ((1 + FirstVersion_GM |Class))
    • M3: M2 + fixed effect of Experimental_condition
  • Compare the models on their fit

  • What do we learn?

  • Make a summary of the best fitting model
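One possible way to set up this exercise, assuming brms and assuming the loaded data frame is called WritingData (check with ls() after loading the .RData file):

```r
library(brms)

load("WritingData.RData")  # assumed to load a data frame named WritingData

# M1: fixed effect of FirstVersion_GM + random intercept per Class
M1 <- brm(SecondVersion ~ FirstVersion_GM + (1 | Class),
          data = WritingData)

# M2: M1 + random slope of FirstVersion_GM across classes
M2 <- brm(SecondVersion ~ FirstVersion_GM + (1 + FirstVersion_GM | Class),
          data = WritingData)

# M3: M2 + fixed effect of the experimental condition
M3 <- brm(SecondVersion ~ FirstVersion_GM + Experimental_condition +
            (1 + FirstVersion_GM | Class),
          data = WritingData)

# compare the models on their fit
loo_compare(loo(M1), loo(M2), loo(M3))

summary(M3)  # summary of the best-fitting model (if M3 wins the comparison)
```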

Divergent transitions…

  • Something to worry about!

  • Essentially: the sampler could not properly explore part of the posterior, so the parameter estimates may be biased

  • Fixes:

    • sometimes fine-tuning the sampling algorithm (e.g., control = list(adapt_delta = 0.9)) works
    • sometimes you need more informative priors
    • sometimes the model is just not a good model
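A sketch of the first fix, assuming the brms random-slope model from the exercise and a data frame named WritingData:

```r
# Re-fit with a higher adapt_delta: the sampler takes smaller steps,
# which is slower but should reduce (ideally eliminate) divergent transitions
M2_refit <- brm(
  SecondVersion ~ FirstVersion_GM + (1 + FirstVersion_GM | Class),
  data    = WritingData,
  control = list(adapt_delta = 0.95)
)
```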

Let’s re-consider the priors
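In brms, more informative priors can be specified with prior() and passed to brm(). A sketch, assuming the exercise model on WritingData; the specific distributions here are illustrative, not recommended choices:

```r
library(brms)

# Illustrative priors; the exact choices should reflect substantive knowledge
priors <- c(
  prior(normal(0, 10), class = "b"),   # fixed-effect coefficients
  prior(exponential(1), class = "sd")  # group-level standard deviations
)

M3_inf <- brm(
  SecondVersion ~ FirstVersion_GM + Experimental_condition +
    (1 + FirstVersion_GM | Class),
  data  = WritingData,
  prior = priors
)

prior_summary(M3_inf)  # check which priors were actually used
```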

Questions?


Do not hesitate to contact me!


THANK YOU!